Results 1 - 20 of 1,103
1.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20245449

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic had a major impact on global health and was associated with millions of deaths worldwide. During the pandemic, imaging characteristics of chest X-ray (CXR) and chest computed tomography (CT) played an important role in screening, diagnosis, and monitoring of disease progression. Various studies have suggested that quantitative image analysis methods, including artificial intelligence and radiomics, can greatly boost the value of imaging in the management of COVID-19. However, few studies have explored the use of longitudinal multi-modal medical images with varying visit intervals for outcome prediction in COVID-19 patients. This study aims to explore the potential of longitudinal multimodal radiomics in predicting the outcome of COVID-19 patients by integrating both CXR and CT images with variable visit intervals through deep learning. A total of 2274 patients who underwent CXR and/or CT scans during disease progression were selected for this study. Of these, 946 patients were treated at the University of Pennsylvania Health System (UPHS); images of the remaining 1328 patients were acquired at Stony Brook University (SBU) and curated by the Medical Imaging and Data Resource Center (MIDRC). In total, 532 radiomic features were extracted with the Cancer Imaging Phenomics Toolkit (CaPTk) from the lung regions in CXR and CT images at all visits. We employed two commonly used deep learning algorithms to analyze the longitudinal multimodal features and evaluated the prediction results based on the area under the receiver operating characteristic curve (AUC). Our models achieved testing AUC scores of 0.816 and 0.836, respectively, for the prediction of mortality. © 2023 SPIE.
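The mortality models above are scored by AUC. As an illustrative aside (not the authors' code), the AUC of a set of prediction scores can be computed directly from rank statistics, without tracing out the full ROC curve:

```python
import numpy as np

def auc_score(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the fraction of
    positive/negative pairs in which the positive case scores higher."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Ties between a positive and a negative score count as half a win.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

An AUC of 0.5 corresponds to chance-level ranking; the 0.816 and 0.836 reported above indicate the models rank deceased patients above survivors for roughly 82-84% of such pairs.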

2.
Proceedings of SPIE - The International Society for Optical Engineering ; 12602, 2023.
Article in English | Scopus | ID: covidwho-20245409

ABSTRACT

With the outbreak of COVID-19, its prevention and treatment have gradually become a focus of public health, and most patients are concerned about its symptoms. COVID-19 has symptoms similar to the common cold and cannot be diagnosed from a patient's symptoms alone, so medical images of the lungs must be examined to determine whether a patient is COVID-19 positive. As the number of patients with pneumonia-like symptoms increases, more and more medical images of the lungs need to be analyzed. At the same time, the number of physicians at this stage falls far short of patients' needs, leaving patients unable to detect and understand their own conditions in time. To address this, we performed image augmentation and data cleaning on a dataset of COVID-19 lung medical images and designed a deep learning classification network to make accurate classification judgments. The network achieves 95.76% classification accuracy on this task through a new fine-tuning method and hyperparameter tuning we designed, with higher accuracy and less training time than classic convolutional neural network models. © 2023 SPIE.

3.
Proceedings of SPIE - The International Society for Optical Engineering ; 12626, 2023.
Article in English | Scopus | ID: covidwho-20245242

ABSTRACT

In 2020, the global spread of Coronavirus Disease 2019 exposed the entire world to a severe health crisis. Equipment shortages and harsh testing environments have limited fast and accurate screening of suspected cases. The diagnosis of suspected cases has benefited greatly from radiographic imaging, including X-ray and CT, as a crucial addition to screening tests for novel coronavirus pneumonia. However, it is impractical to gather enormous volumes of data quickly, which makes it difficult to train deep models. To solve these problems, we obtained a new dataset by applying the Mixup data augmentation method to the chest CT slices used. The approach builds on a deep lung infection segmentation network (Inf-Net [1]) and adds a semi-supervised learning framework, forming a Mixup-Inf-Net semi-supervised learning framework model to identify COVID-19 infection areas from chest CT slices. The system depends primarily on unlabeled data and requires only a minimal amount of annotated data; therefore, the unlabeled data generated by Mixup provides good assistance. Our framework can be used to improve both learning and performance. The SemiSeg dataset and the actual 3D CT images that we produced were used in a variety of tests, and the analysis shows that the Mixup-Inf-Net semi-supervised learning framework model outperforms most state-of-the-art segmentation models in this study and enhances segmentation performance. © 2023 SPIE.
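The Mixup augmentation this abstract relies on blends pairs of samples and their labels with a Beta-distributed weight. A minimal sketch of the idea (an illustration only, not the Inf-Net pipeline; the function signature is our own):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Blend two samples (and their labels) with a Beta(alpha, alpha) weight."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)                       # mixing coefficient in (0, 1)
    x = lam * np.asarray(x1, float) + (1 - lam) * np.asarray(x2, float)
    y = lam * y1 + (1 - lam) * y2                      # soft label for the mixed sample
    return x, y, lam
```

Because each synthetic sample is a convex combination of two real ones, the augmented set grows without any extra annotation effort, which is what makes Mixup attractive in the low-label regime described above.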

4.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12467, 2023.
Article in English | Scopus | ID: covidwho-20244646

ABSTRACT

It is important to evaluate medical imaging artificial intelligence (AI) models for possible implicit discrimination (ability to distinguish between subgroups not related to the specific clinical task of the AI model) and disparate impact (difference in outcome rate between subgroups). We studied potential implicit discrimination and disparate impact of a published deep learning/AI model for the prediction of ICU admission for COVID-19 within 24 hours of imaging. The IRB-approved, HIPAA-compliant dataset contained 8,357 chest radiography exams from February 2020-January 2022 (12% ICU admission within 24 hours) and was separated by patient into training, validation, and test sets (64%, 16%, 20% split). The AI output was evaluated in two demographic categories: sex assigned at birth (subgroups male and female) and self-reported race (subgroups Black/African-American and White). We failed to show statistical evidence that the model could implicitly discriminate between members of subgroups categorized by race based on prediction scores (area under the receiver operating characteristic curve, AUC: median [95% confidence interval, CI]: 0.53 [0.48, 0.57]), but there was some marginal evidence of implicit discrimination between members of subgroups categorized by sex (AUC: 0.54 [0.51, 0.57]). No statistical evidence for disparate impact (DI) was observed between the race subgroups (i.e., the 95% CI of the ratio of the favorable outcome rate between the two subgroups included one) for the example operating point of the maximized Youden index, but some evidence of disparate impact on the male subgroup based on sex was observed. These results help develop evaluation of implicit discrimination and disparate impact of AI models in the context of decision thresholds. © COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only.
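The disparate-impact criterion used above compares favorable-outcome rates between two subgroups at a chosen operating point; a ratio of one means no disparity. A hedged sketch of that ratio (the function name, the "favorable = below threshold" convention, and the subgroup labels are our own, not from the paper):

```python
import numpy as np

def disparate_impact_ratio(scores, group, a, b, threshold):
    """Ratio of favorable-outcome rates between subgroups `a` and `b` at a
    given operating threshold. Here 'favorable' is taken to mean the model
    does NOT flag the case (score below threshold)."""
    scores = np.asarray(scores)
    group = np.asarray(group)
    favorable = scores < threshold
    return favorable[group == a].mean() / favorable[group == b].mean()
```

In the study above, the point estimate is accompanied by a 95% CI (e.g. from bootstrapping); evidence of disparate impact is claimed only when that interval excludes one.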

5.
ACM International Conference Proceeding Series ; : 419-426, 2022.
Article in English | Scopus | ID: covidwho-20244497

ABSTRACT

The size and location of lesions in CT images of novel coronavirus pneumonia (COVID-19) change constantly, and the lesion areas have low contrast and blurred boundaries, making segmentation difficult. To solve this problem, a COVID-19 image segmentation algorithm based on a conditional generative adversarial network (CGAN) is proposed. It uses an improved DeeplabV3+ network as the generator, which enhances the extraction of multi-scale contextual features, reduces the number of network parameters, and improves the training speed. A Markov discriminator with six fully convolutional layers is proposed in place of a common discriminator, with the aim of focusing more on the local features of the CT image. Through continuous adversarial training between the generator and the discriminator, the network weights are optimized so that the final segmented image generated by the generator approaches the ground truth as closely as possible. On a public COVID-19 CT dataset, the area under the ROC curve, F1-score, and Dice similarity coefficient reached 96.64%, 84.15%, and 86.14%, respectively. The experimental results show that the proposed algorithm is accurate and robust, has the potential to become a safe, inexpensive, and time-saving medical assistant tool in clinical diagnosis, and provides a reference for computer-aided diagnosis. © 2022 ACM.
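The Dice similarity coefficient reported above is a standard overlap measure between a predicted segmentation mask and the ground truth. As a generic illustration (not the paper's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|). `eps` guards against empty masks."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A Dice of 1.0 means perfect overlap; the 86.14% above therefore indicates substantial but imperfect agreement with the expert annotations.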

6.
ACM International Conference Proceeding Series ; 2022.
Article in English | Scopus | ID: covidwho-20244307

ABSTRACT

This paper proposes a deep learning-based approach to detect COVID-19 infections in lung tissues from chest Computed Tomography (CT) images. A two-stage classification model is designed to identify the infection from CT scans of COVID-19 and Community Acquired Pneumonia (CAP) patients. The proposed neural model, named Residual C-NiN, uses a modified convolutional neural network (CNN) with residual connections and a Network-in-Network (NiN) architecture for COVID-19 and CAP detection. The model is trained with the Signal Processing Grand Challenge (SPGC) 2021 COVID dataset. The proposed neural model achieves a slice-level classification accuracy of 93.54% on chest CT images and a patient-level classification accuracy of 86.59%, with class-wise sensitivities of 92.72%, 55.55%, and 95.83% for the COVID-19, CAP, and Normal classes, respectively. Experimental results show the benefit of adding NiN and residual connections to the proposed neural architecture. Experiments conducted on the dataset show significant improvement over the existing state-of-the-art methods reported in the literature. © 2022 ACM.
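Class-wise sensitivity, as reported above, is simply per-class recall: of all samples truly belonging to a class, the fraction the classifier labeled correctly. A small illustrative helper (not the authors' code):

```python
import numpy as np

def classwise_sensitivity(y_true, y_pred, classes):
    """Per-class sensitivity (recall): TP / (TP + FN) for each class.
    Each class in `classes` must appear at least once in y_true."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return {c: float((y_pred[y_true == c] == c).mean()) for c in classes}
```

Reporting sensitivity per class, as done above, exposes imbalances that a single accuracy figure hides, such as the much weaker 55.55% sensitivity for the CAP class.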

7.
Electronics ; 12(11):2378, 2023.
Article in English | ProQuest Central | ID: covidwho-20244207

ABSTRACT

This paper presents a control system for indoor safety measures using a Faster R-CNN (Region-based Convolutional Neural Network) architecture. The proposed system aims to ensure the safety of occupants in indoor environments by detecting and recognizing potential safety hazards in real time, such as capacity control, social distancing, or mask use. Using deep learning techniques, the system detects these situations to be controlled, notifying the person in charge of the company if any of these are violated. The proposed system was tested in a real teaching environment at Rey Juan Carlos University, using Raspberry Pi 4 as a hardware platform together with an Intel Neural Stick board and a pair of PiCamera RGB (Red Green Blue) cameras to capture images of the environment and a Faster R-CNN architecture to detect and classify objects within the images. To evaluate the performance of the system, a dataset of indoor images was collected and annotated for object detection and classification. The system was trained using this dataset, and its performance was evaluated based on precision, recall, and F1 score. The results show that the proposed system achieved a high level of accuracy in detecting and classifying potential safety hazards in indoor environments. The proposed system includes an efficiently implemented software infrastructure to be launched on a low-cost hardware platform, which is affordable for any company, regardless of size or revenue, and it has the potential to be integrated into existing safety systems in indoor environments such as hospitals, warehouses, and factories, to provide real-time monitoring and alerts for safety hazards. Future work will focus on enhancing the system's robustness and scalability to larger indoor environments with more complex safety hazards.

8.
Proceedings of SPIE - The International Society for Optical Engineering ; 12567, 2023.
Article in English | Scopus | ID: covidwho-20244192

ABSTRACT

The COVID-19 pandemic has challenged many healthcare systems around the world. Many patients who have been hospitalized due to this disease develop lung damage. In low- and middle-income countries, people living in rural and remote areas have very limited access to adequate health care. Ultrasound is a safe, portable, and accessible alternative; however, it has limitations such as being operator-dependent and requiring a trained professional. The use of lung ultrasound volume sweep imaging is a potential solution to this lack of physicians. To support this protocol, image processing together with machine learning is a potential methodology for an automatic lung damage screening system. In this paper we present automatic detection of lung ultrasound artifacts using a Deep Neural Network, identifying clinically relevant artifacts such as pleural lines and A-lines contained in the ultrasound examinations taken as part of the clinical screening of patients with suspected lung damage. The model achieved encouraging preliminary results, with a sensitivity of 94%, specificity of 81%, and accuracy of 89% in identifying the presence of A-lines. Finally, the present study could result in an alternative solution for operator-independent lung damage screening in rural areas, leading to the integration of AI-based technology as a complementary tool for healthcare professionals. © 2023 SPIE.

9.
IEEE Transactions on Radiation and Plasma Medical Sciences ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-20244069

ABSTRACT

Automatic lung infection segmentation in computed tomography (CT) scans can offer great assistance in radiological diagnosis by improving accuracy and reducing the time required for diagnosis. The biggest challenges for deep learning (DL) models in segmenting infection regions are the high variance in infection characteristics, fuzzy boundaries between infected and normal tissues, and the difficulty of obtaining large numbers of annotated scans for training. To resolve these issues, we propose a Modified U-Net (Mod-UNet) model with minor architectural changes and significant modifications to the training process of the vanilla 2D UNet. As part of these modifications, we updated the loss function, optimization function, and regularization methods, added a learning rate scheduler, and applied advanced data augmentation techniques. Segmentation results on two COVID-19 lung CT segmentation datasets show that the performance of Mod-UNet is considerably better than the baseline U-Net. Furthermore, to mitigate the lack of annotated data, Mod-UNet is used in a semi-supervised framework (Semi-Mod-UNet) that follows a random sampling approach to progressively enlarge the training dataset from a large pool of unannotated CT slices. Exhaustive experiments on the two COVID-19 CT segmentation datasets and on a real lung CT volume show that Mod-UNet and Semi-Mod-UNet significantly outperform other state-of-the-art approaches in automated lung infection segmentation. © IEEE.
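The Semi-Mod-UNet loop described above progressively enlarges the training set by pseudo-labeling randomly sampled unannotated slices. A toy sketch of that control flow (the `train_fn`/`predict_fn` callables stand in for the actual U-Net and are entirely our own abstraction, not the paper's implementation):

```python
import numpy as np

def pseudo_label_rounds(labeled, unlabeled, train_fn, predict_fn,
                        rounds=3, k=2, rng=None):
    """Each round: train on the current labeled pool, pseudo-label a random
    sample of k unlabeled items with the trained model, and add them to the
    pool. Returns the enlarged list of (sample, label) pairs."""
    if rng is None:
        rng = np.random.default_rng(0)
    pool = list(unlabeled)
    data = list(labeled)
    for _ in range(rounds):
        model = train_fn(data)
        if not pool:
            break
        picks = rng.choice(len(pool), size=min(k, len(pool)), replace=False)
        for i in sorted(picks, reverse=True):   # pop higher indices first
            x = pool.pop(i)
            data.append((x, predict_fn(model, x)))
    return data
```

Retraining between rounds lets each new batch of pseudo-labels benefit from the model improved by the previous batch, which is the core idea of this style of semi-supervision.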

10.
2023 11th International Conference on Information and Education Technology, ICIET 2023 ; : 480-484, 2023.
Article in English | Scopus | ID: covidwho-20243969

ABSTRACT

In recent years, COVID-19 has made it difficult for people to interact face-to-face, but various kinds of social interaction are still needed. We have therefore developed an online interactive system based on image processing that allows people in different places to merge the human regions of two images onto the same image in real time. The system can be used in a variety of situations to extend its interactive applications. It is mainly based on a human segmentation task performed with a CNN (Convolutional Neural Network) method. Images from different locations are transmitted to a computing server through the Internet. In our design, the system ensures that the CNN method can run in real time, allowing users on both sides to see the integrated image at 30 FPS when the network is running smoothly. © 2023 IEEE.

11.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12469, 2023.
Article in English | Scopus | ID: covidwho-20242921

ABSTRACT

The Medical Imaging and Data Resource Center (MIDRC) was built to support AI-based research in response to the COVID-19 pandemic. One of the main goals of MIDRC is to make the data collected in the repository ready for AI analysis. Due to data heterogeneity, there is a need to standardize the data and make data mining easier. Our study aims to stratify imaging data according to the underlying anatomy using open-source image processing tools. The experiments were performed using Google Colaboratory on computed tomography (CT) imaging data available from the MIDRC. We adapted existing open-source tools to process CT series (N=389), defining image sub-volumes according to body part classification and additionally identifying series slices containing specific anatomic landmarks. Cases with automatically identified chest regions (N=369) were then processed to automatically segment the lungs. To assess the accuracy of segmentation, we performed outlier analysis using 3D shape radiomics features extracted from the left and right lungs. Standardized DICOM objects were created to store the resulting segmentations, regions, landmarks, and radiomics features. We demonstrated that the MIDRC chest CT collections can be enriched using open-source analysis tools and that data available in MIDRC can be further used to evaluate the robustness of publicly available tools. © 2023 SPIE.

12.
2023 6th International Conference on Information Systems and Computer Networks, ISCON 2023 ; 2023.
Article in English | Scopus | ID: covidwho-20242881

ABSTRACT

Coronavirus disease, initially diagnosed in 2019, has propagated rapidly across the globe and led to increased fatalities. According to professional physicians who examined chest CT scans, COVID-19 behaves differently from various viral cases of pneumonia. Even though the illness only recently emerged, a number of research investigations have been performed in which the progression of the disease, which mostly affects the lungs, is identified using thoracic CT scans. In this work, COVID-19 is identified automatically using a machine learning classifier trained on more than 1000 lung CT scan images. As a result, the immediate diagnosis of COVID-19, which healthcare specialists consider very necessary, is feasible. To improve detection accuracy, feature extraction methods are applied to regions of interest. Feature extraction approaches including the Discrete Wavelet Transform (DWT), Grey Level Co-occurrence Matrix (GLCM), Grey Level Run Length Matrix (GLRLM), and Grey-Level Size Zone Matrix (GLSZM) algorithms are used, followed by classification with Support Vector Machines (SVM). Classification accuracy is assessed using precision, specificity, accuracy, sensitivity, and F-score measures. Among all feature extraction methods, the GLCM approach gave the optimum classification accuracy of 95.6%. © 2023 IEEE.
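The GLCM that performed best above counts how often pairs of grey levels co-occur at a fixed pixel offset; texture features such as contrast are then computed from the normalized table. A compact illustrative sketch (not the paper's implementation; real pipelines typically use an optimized library routine):

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Grey Level Co-occurrence Matrix for a single pixel offset (dx, dy),
    normalized into a joint probability table p(i, j)."""
    img = np.asarray(image)
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(p):
    """Contrast texture feature: sum over (i, j) of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())
```

Feature vectors built from several such statistics (contrast, homogeneity, energy, correlation) over multiple offsets are what the SVM classifier above would consume.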

13.
IEEE Access ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-20242834

ABSTRACT

During the formation of medical images, they are easily disturbed by factors such as acquisition devices and tissue backgrounds, causing problems such as blurred image backgrounds and difficulty in differentiation. In this paper, we combine the HarDNet module and the multi-coding attention mechanism module to optimize the encoding and decoding stages and improve the model's segmentation performance. In the encoding stage, the HarDNet module extracts medical image feature information to improve the segmentation network's operation speed. In the decoding stage, the multi-coding attention module extracts both position and channel feature information from the image to improve the segmentation effect. Finally, to improve the segmentation accuracy for small targets, a combined Cross Entropy and Dice function is proposed as the loss function of this algorithm. The algorithm was evaluated on three different types of medical datasets: Kvasir-SEG, ISIC2018, and COVID-19CT. The JS values were 0.7189, 0.7702, and 0.9895; ACC values were 0.8964, 0.9491, and 0.9965; SENS values were 0.7634, 0.8204, and 0.9976; and PRE values were 0.9214, 0.9504, and 0.9931, respectively. The experimental results showed that the proposed model achieved excellent segmentation results on all of the above evaluation indexes, which can effectively assist doctors in diagnosing related diseases quickly, improving the speed of diagnosis and patients' quality of life. © Author.
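A combined Cross Entropy and Dice loss, as proposed above, balances per-pixel classification pressure (CE) with region-overlap pressure (Dice), which helps small targets. A minimal numpy sketch of one common weighting scheme (the weight `w` and exact formulation are our assumptions, not the paper's):

```python
import numpy as np

def ce_dice_loss(pred, target, w=0.5, eps=1e-7):
    """Weighted sum of binary cross-entropy and Dice loss.
    `pred` holds probabilities in (0, 1); `target` is a binary mask."""
    pred = np.clip(np.asarray(pred, float), eps, 1 - eps)
    target = np.asarray(target, float)
    ce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()
    inter = (pred * target).sum()
    dice = 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    return w * ce + (1 - w) * dice
```

CE alone can be dominated by the abundant background pixels; adding the Dice term keeps the optimizer focused on overlap with the (possibly tiny) foreground region.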

14.
ACM International Conference Proceeding Series ; : 12-21, 2022.
Article in English | Scopus | ID: covidwho-20242817

ABSTRACT

The global COVID-19 pandemic has caused a worldwide health crisis. Automated diagnostic methods can help control the spread of the pandemic, as well as assist physicians in tackling high workloads through the quick treatment of affected patients. Owing to the scarcity of medical images drawn from different resources, the resulting image heterogeneity has raised challenges for effective network training and for learning robust features. We propose a multi-joint unit network for the diagnosis of COVID-19 using a joint unit module, which leverages receptive fields from multiple resolutions to learn rich representations. Existing approaches usually employ a large number of layers to learn features, which consequently requires more computational power and increases network complexity. To compensate, our joint unit module extracts low-, same-, and high-resolution feature maps simultaneously using different phases. These learned feature maps are then fused and passed to the classification layers. We observed that our model learns sufficient information for classification without a performance loss and with faster convergence. We used three public benchmark datasets to demonstrate the performance of our network. Our proposed network consistently outperforms existing state-of-the-art approaches, demonstrating better accuracy, sensitivity, specificity, and F1-score across all datasets. © 2022 ACM.

15.
2023 3rd International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies, ICAECT 2023 ; 2023.
Article in English | Scopus | ID: covidwho-20242769

ABSTRACT

Monkeypox is a skin disease that spreads from animals to people and then from person to person; it is zoonotic and belongs to the genus Orthopoxvirus. There is no specific treatment for monkeypox, but its symptoms are very similar to those of smallpox, so antiviral drugs developed to protect against the smallpox virus may be used for people infected with monkeypox. Prevention of monkeypox resembles that of COVID-19: proper hand washing, smallpox vaccination, keeping away from infected people, and using PPE kits. In this paper, deep learning is used for the detection of monkeypox with the help of a CNN model. The original dataset contains a total of 228 images, of which 102 belong to the monkeypox class and the remaining 126 represent the normal class. Because deep learning requires larger amounts of data, data augmentation was applied, bringing the total number of images to 3192. A variety of optimizers were evaluated, and a comparison was made based on the loss, accuracy, AUC, F1 score, validation loss, validation accuracy, validation AUC, and validation F1 score of each optimizer. After comparing all optimizers, the Adam optimizer gives the best result, with a total testing accuracy of 92.21% over 100 epochs. With the help of the deep learning model, doctors can easily detect the monkeypox virus from a single image of an infected person. © 2023 IEEE.

16.
2022 OPJU International Technology Conference on Emerging Technologies for Sustainable Development, OTCON 2022 ; 2023.
Article in English | Scopus | ID: covidwho-20242650

ABSTRACT

Deep Convolutional Neural Networks are a form of neural network that can categorize, recognize, or segment images. The problem of COVID-19 detection has become one of the world's most pressing challenges since 2019. In this research work, chest X-ray images are used to detect whether patients are COVID-positive or COVID-negative with the help of the pre-trained models VGG16, InceptionV3, ResNet50, and InceptionResNetV2. In this paper, 821 samples are used for training, 186 samples for validation, and 184 samples for testing. The hybrid model InceptionResNetV2 achieved an overall maximum accuracy of 94.56%, with a recall of 96% for normal CXR images and a precision of 95.12% for COVID-positive images. The lowest accuracy, 92.93% on the testing dataset, was achieved by the ResNet50 model, with a recall of 93.93% for normal images. Throughout the implementation process, it was discovered that factors such as the number of epochs had a considerable impact on the model's accuracy. Consequently, it is advised that the model be trained with a sufficient number of epochs to provide reliable classification results. The study's findings suggest that deep learning models have excellent potential for correctly identifying COVID-positive or COVID-negative cases using CXR images. © 2023 IEEE.

17.
2022 IEEE Information Technologies and Smart Industrial Systems, ITSIS 2022 ; 2022.
Article in English | Scopus | ID: covidwho-20242116

ABSTRACT

The main purpose of this paper was to classify whether a subject has COVID-19 or not based on CT scans. CNN and ResNet-101 neural network architectures are used to identify the coronavirus. The experimental results showed that the two models, CNN and ResNet-101, can accurately distinguish patients who have COVID-19 from others, with excellent accuracies of 83.97% and 90.05%, respectively. The results demonstrate the strong ability of the used models in the current application domain. © 2022 IEEE.

18.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12470, 2023.
Article in English | Scopus | ID: covidwho-20241885

ABSTRACT

Stroke is a leading cause of morbidity and mortality throughout the world. Three-dimensional ultrasound (3DUS) imaging has been shown to be more sensitive to treatment effect and more accurate in stratifying stroke risk than two-dimensional ultrasound (2DUS) imaging. Point-of-care ultrasound screening (POCUS) is important for patients with limited mobility and at times when patients have limited access to the ultrasound scanning room, such as in the COVID-19 era. We used an optical tracking system to track the 3D position and orientation of the 2DUS frames acquired by a commercial wireless ultrasound system and subsequently reconstructed a 3DUS image from these frames. The tracking requires spatial and temporal calibrations. Spatial calibration is required to determine the spatial relationship between the 2DUS machine and the tracking system; it was achieved by localizing landmarks with known coordinates in a custom-designed Z-fiducial phantom in a 2DUS image. Temporal calibration is needed to synchronize the clocks of the wireless ultrasound system and the optical tracking system, so that the position and orientation detected by the optical tracking system can be registered to the corresponding 2DUS frame. Temporal calibration was achieved by initiating the scanning with an abrupt motion that can be readily detected in both systems. This abrupt motion establishes a common reference time point, thereby synchronizing the clocks of both systems. We demonstrated that the system can be used to visualize the three-dimensional structure of a carotid phantom. The error rate of the measurements is 2.3%. Upon in-vivo validation, this system will allow POCUS carotid scanning in clinical research and practice. © 2023 SPIE.
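The temporal calibration described above aligns two recordings of the same abrupt motion. One generic way to recover the offset between two such signals (an illustration of the principle, not the authors' implementation) is the peak of their cross-correlation:

```python
import numpy as np

def estimate_lag(sig_a, sig_b):
    """Estimate the sample offset between two signals that both record the
    same abrupt event, as the peak of their cross-correlation."""
    a = np.asarray(sig_a, float) - np.mean(sig_a)
    b = np.asarray(sig_b, float) - np.mean(sig_b)
    corr = np.correlate(a, b, mode="full")
    # In 'full' mode, zero lag sits at index len(b) - 1.
    return int(np.argmax(corr) - (len(b) - 1))
```

Once the lag is known, every tracker pose can be shifted onto the ultrasound clock so that each 2DUS frame receives the pose recorded at the same instant.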

19.
Conference Proceedings - IEEE SOUTHEASTCON ; 2023-April:877-882, 2023.
Article in English | Scopus | ID: covidwho-20241538

ABSTRACT

Automated face recognition is a widely adopted machine learning technology for contactless identification of people in various processes such as automated border control, secure login to electronic devices, community surveillance, tracking school attendance, and workplace clock-in and clock-out. Using face masks has become crucial in daily life with the recent worldwide COVID-19 pandemic, and the use of face masks causes the performance of conventional face recognition technologies to degrade considerably. The effect of mask-wearing on face recognition is as yet an understudied issue. In this paper, we address this issue by evaluating the performance of a number of face recognition models tested on masked and unmasked face images. We use six conventional machine learning algorithms, namely SVC, KNN, LDA, DT, LR, and NB, to find out which perform best, and which perform poorly, in the presence of masked face images. Local Binary Pattern (LBP) is utilized as the feature extraction operator. We generated and used synthesized masked face images. We prepared unmasked, masked, and half-masked training datasets and evaluated face recognition performance against both masked and unmasked images to present a broad view of this crucial problem. We believe our study is unique in elaborating mask-aware facial recognition with almost all possible scenarios, including half_masked-to-masked and half_masked-to-unmasked, besides evaluating a larger number of conventional machine learning algorithms compared to other studies in the literature. © 2023 IEEE.
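The LBP operator used above encodes local texture by thresholding each pixel's 3x3 neighborhood against its center value. A minimal sketch of the basic per-pixel code (illustrative only; the bit ordering is our own convention, and production pipelines histogram these codes over image cells):

```python
import numpy as np

def lbp_pixel(patch):
    """Basic 3x3 Local Binary Pattern code for the center pixel: each of the
    8 neighbors contributes one bit, set when it is >= the center value."""
    center = patch[1, 1]
    # Neighbors visited clockwise, starting at the top-left corner.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] >= center else 0 for r, c in order]
    return sum(b << i for i, b in enumerate(bits))
```

Because the code depends only on intensity ordering, not absolute brightness, LBP features are robust to monotonic lighting changes, one reason they remain a common baseline descriptor for face recognition.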

20.
2023 3rd International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies, ICAECT 2023 ; 2023.
Article in English | Scopus | ID: covidwho-20241226

ABSTRACT

In December 2019, several cases of pneumonia caused by SARS-CoV-2 were identified in the city of Wuhan (China); the WHO declared the outbreak a pandemic in March 2020 because its rapid transmission caused enormous public health problems. As the outbreak was uncontrolled, precautions were taken all over the world to moderate the coronavirus, which was undoubtedly very deadly and presented several symptoms, fever being a common one. A frequently used biosecurity measure is temperature measurement with an infrared thermometer, which is not well regarded by some specialists due to the error it presents and therefore does not represent a safe measurement. In view of this problem, this article presents a thermal image processing system for measuring body temperature by means of a drone, obtaining accurate body temperature values; it can be deployed anywhere such measurement is needed, helping to combat the spread of the virus that continues to affect many people. During development, the system was tested with various people, obtaining a more accurate measurement of body temperature with an efficiency of 98.46% at 1.45 m between the drone and the person, such that a person presenting a body temperature higher than 38 °C could be infected with COVID-19. © 2023 IEEE.
